
    Projection-Based and Look Ahead Strategies for Atom Selection

    In this paper, we improve iterative greedy search algorithms in which atoms are selected serially, i.e., one by one over the iterations. For serial atom selection, we devise two new schemes for choosing an atom from a set of potential atoms in each iteration; each scheme leads to a new algorithm. In both algorithms, the set of potential atoms in each iteration is found using a standard matched filter. In the first scheme, we propose an orthogonal projection strategy that selects an atom from the set of potential atoms. In the second scheme, we propose a look-ahead strategy in which the selection of an atom in the current iteration takes its effect on future iterations into account. The look-ahead strategy requires more computational resources. To achieve a trade-off between performance and complexity, we use the two schemes in cascade and develop a third new algorithm. Through experimental evaluations, we compare the proposed algorithms with existing greedy search and convex relaxation algorithms. (Comment: sparsity, compressive sensing; IEEE Transactions on Signal Processing 201)
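    The projection-based selection scheme can be sketched roughly as follows: a matched filter shortlists a few candidate atoms, and the candidate whose orthogonal projection most reduces the residual is kept. This is a minimal illustration of the idea, not the paper's exact algorithm; the matrix sizes, candidate count, and function names are illustrative assumptions.

    ```python
    import numpy as np

    def omp_projection_select(A, y, k, n_candidates=4):
        """Serial greedy atom selection (one atom per iteration).

        Each iteration, a matched filter A^T r proposes a shortlist of
        candidate atoms; the candidate whose orthogonal projection yields
        the smallest residual is kept.  Illustrative sketch only.
        """
        m, n = A.shape
        support, r = [], y.copy()
        for _ in range(k):
            corr = np.abs(A.T @ r)
            corr[support] = -np.inf                     # exclude atoms already chosen
            cands = np.argsort(corr)[-n_candidates:]    # matched-filter shortlist
            best, best_res = None, np.inf
            for j in cands:
                S = support + [int(j)]
                x_S, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
                res = np.linalg.norm(y - A[:, S] @ x_S)  # residual after projection
                if res < best_res:
                    best, best_res = int(j), res
            support.append(best)
            x_S, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            r = y - A[:, support] @ x_S
        x = np.zeros(n)
        x[support] = x_S
        return x, sorted(support)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100)) / np.sqrt(40)
    x_true = np.zeros(100)
    x_true[[5, 17, 63]] = [2.0, -3.0, 2.5]
    x_hat, supp = omp_projection_select(A, A @ x_true, k=3)
    ```

    In the noiseless demo above, the projection step reliably singles out the true atoms from the shortlist; the look-ahead scheme (second contribution) additionally weighs how a choice affects later iterations.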

    Presidential Office: Foreword

    We study the fundamental relationship between two relevant quantities in compressive sensing: the measurement rate, which characterizes the asymptotic behavior of the dimensions of the measurement matrix through the ratio m/log n (m being the number of measurements and n the dimension of the sparse signal), and the mean square estimation error. First, we use an information-theoretic approach to derive sufficient conditions on the measurement rate for reliably recovering a part of the support set that represents a certain fraction of the total signal power when the sparsity level is fixed. Second, we characterize the mean square error of an estimator that uses partial support set information. Combining these two parts, we derive a tradeoff between the measurement rate and the mean square error. This tradeoff is achievable using a two-step approach: first support set recovery, then estimation of the active components. Finally, for both deterministic and random signals, we perform a numerical evaluation to verify the advantages of the methods based on partial support set recovery.
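    The two-step approach can be sketched as follows. The matched-filter support detector used here is an illustrative stand-in for the paper's information-theoretic decoder, and all dimensions and amplitudes are arbitrary assumptions.

    ```python
    import numpy as np

    def two_step_estimate(A, y, k):
        """Two-step approach from the abstract: (1) recover a support
        estimate, here via simple matched-filter thresholding (a stand-in
        for the information-theoretic decoder in the paper); (2) least-
        squares estimation of the active components on that support."""
        n = A.shape[1]
        supp = np.sort(np.argsort(np.abs(A.T @ y))[-k:])           # step 1: support recovery
        x = np.zeros(n)
        x[supp], *_ = np.linalg.lstsq(A[:, supp], y, rcond=None)   # step 2: estimation
        return x, supp

    rng = np.random.default_rng(1)
    m, n, k = 150, 200, 4
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    true_supp = [12, 57, 130, 181]
    x_true[true_supp] = [6.0, -6.0, 6.0, -6.0]
    x_hat, supp = two_step_estimate(A, A @ x_true, k)
    ```

    When step 1 recovers the correct support (here the amplitudes are chosen well above the matched-filter crosstalk), step 2 reduces to an overdetermined least-squares problem on the active components.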


    Greedy Algorithms for Distributed Compressed Sensing

    Compressed sensing (CS) is a recently developed sub-sampling technique that exploits sparsity in full signals; most natural signals possess this sparsity property. From a sub-sampled vector, a CS reconstruction algorithm is used to recover the full signal. One class of reconstruction algorithms is the greedy pursuit, or simply greedy, algorithms, which are popular due to their low complexity and good performance. Meanwhile, in sensor networks, sensor nodes monitor natural data for estimation or detection. One application of sensor networking is cognitive radio networks, where sensor nodes estimate a power spectral density; the data measured by different sensors in such networks are typically correlated. Another setting is multiple-processor networks of computational nodes that cooperate to solve problems too difficult for the nodes to solve individually. In this thesis, we mainly consider greedy algorithms for distributed CS. To this end, we begin with a review of current knowledge in the field, where we also introduce signal models for correlation and network models for simulating networks. We proceed by considering two applications: power spectral density estimation and distributed reconstruction algorithms for multiple-processor networks. We then delve deeper into the greedy algorithms with the objective of improving reconstruction performance; this naturally comes at the expense of increased computational complexity. The main objective of the thesis is to design greedy algorithms for distributed CS that exploit data correlation in sensor networks to improve performance. We develop several such algorithms, in which a key element is the use of intuitive democratic voting principles. Finally, we show the merit of such voting principles by probabilistic analysis based on a new input/output system model of greedy algorithms in CS.
    By comparing the new single-sensor algorithms to well-known greedy pursuit algorithms from the literature, we see that the goal of improved performance is achieved. We compare complexity using big-O analysis, characterizing the increase in complexity, and verify both the performance and the complexity claims through simulations. The complexity of distributed algorithms is typically harder to analyze, since it depends on the specific problem and network topology; where analysis is not possible, we provide extensive simulation results. No distributed algorithms based on the signal models used in this thesis were previously available in the literature. We therefore compare our algorithms to standard single-sensor algorithms, and our results can then easily serve as benchmarks for future research. Compared to the stand-alone case, the new distributed algorithms provide significant performance gains. Throughout the thesis, we strive to present the work in a smooth flow of algorithm design, simulation results, and analysis.
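    The democratic-voting principle can be illustrated with a minimal centralized sketch, assuming sensors share a common support but have sensor-specific amplitudes; the thesis's actual algorithms are more elaborate (and fully distributed), so this is only an illustration of the voting idea.

    ```python
    import numpy as np

    def vote_fused_recovery(As, ys, k):
        """Democratic-voting sketch for distributed CS with a shared support:
        every sensor votes for the k atoms its own matched filter ranks
        highest, the k most-voted atoms form the fused support, and each
        sensor then solves a least-squares problem on that support."""
        n = As[0].shape[1]
        votes = np.zeros(n)
        for A, y in zip(As, ys):
            votes[np.argsort(np.abs(A.T @ y))[-k:]] += 1   # one vote per proposed atom
        fused = np.sort(np.argsort(votes)[-k:])            # majority support
        estimates = []
        for A, y in zip(As, ys):
            x = np.zeros(n)
            x[fused], *_ = np.linalg.lstsq(A[:, fused], y, rcond=None)
            estimates.append(x)
        return fused, estimates

    rng = np.random.default_rng(2)
    n, k, m, L = 100, 3, 25, 8        # ambient dim, sparsity, measurements per sensor, sensors
    true_supp = [10, 40, 77]
    As, ys, xs_true = [], [], []
    for _ in range(L):                # common support, sensor-specific amplitudes
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x = np.zeros(n)
        x[true_supp] = rng.choice([-1.0, 1.0], k) * rng.uniform(1.5, 2.5, k)
        As.append(A); ys.append(A @ x); xs_true.append(x)
    fused, estimates = vote_fused_recovery(As, ys, k)
    ```

    Each sensor alone has too few measurements to be reliable, but wrong atoms rarely attract votes from more than a couple of sensors, so the pooled vote isolates the common support.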

    Compressed Sensing : Algorithms and Applications

    The theoretical problem of finding the solution to an underdetermined set of linear equations has for several years attracted considerable attention in the literature. This problem has many practical applications. One example of such an application is compressed sensing (CS), which has the potential to revolutionize how we acquire and process signals. In a general CS setup, few measurement coefficients are available and the task is to reconstruct a larger, sparse signal. In this thesis we focus on algorithm design and selected applications for CS. The contributions of the thesis appear in the following order: (1) We study an application where CS can be used to relax the necessity of fast sampling for power spectral density estimation problems. In this application we show by experimental evaluation that we can gain an order of magnitude in reduced sampling frequency. (2) In order to improve CS recovery performance, we extend simple well-known recovery algorithms by introducing a look-ahead concept. From simulations it is observed that the additional complexity results in significant improvements in recovery performance. (3) For sensor networks, we extend the current framework of CS by introducing a new general network model which is suitable for modeling several CS sensor nodes with correlated measurements. Using this signal model we then develop several centralized and distributed CS recovery algorithms. We find that both the centralized and distributed algorithms achieve a significant gain in recovery performance compared to the standard, disconnected, algorithms. For the distributed case, we also see that as the network connectivity increases, the performance rapidly converges to the performance of the centralized solution.
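    The look-ahead concept in contribution (2) can be sketched as follows: among the matched filter's top candidates, keep the atom that leads to the smallest residual after a few further ordinary greedy steps. This is a sketch under assumed candidate and look-ahead depths, not the thesis's exact algorithm.

    ```python
    import numpy as np

    def lookahead_residual(A, y, support, extra):
        """Continue plain OMP for `extra` further iterations and return
        the final residual norm, used to judge a tentative atom choice."""
        support = list(support)
        for _ in range(extra):
            x_S, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            corr = np.abs(A.T @ (y - A[:, support] @ x_S))
            corr[support] = -np.inf
            support.append(int(np.argmax(corr)))
        x_S, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        return float(np.linalg.norm(y - A[:, support] @ x_S))

    def lookahead_omp(A, y, k, extra=2, n_candidates=3):
        """OMP with look-ahead: the selected atom is the shortlisted
        candidate whose tentative choice minimizes the residual a few
        iterations ahead, so a pick is judged by its future effect."""
        m, n = A.shape
        support, r = [], y.copy()
        for _ in range(k):
            corr = np.abs(A.T @ r)
            corr[support] = -np.inf
            cands = np.argsort(corr)[-n_candidates:]
            best = min(cands, key=lambda j: lookahead_residual(A, y, support + [int(j)], extra))
            support.append(int(best))
            x_S, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            r = y - A[:, support] @ x_S
        x = np.zeros(n)
        x[support] = x_S
        return x, sorted(support)

    rng = np.random.default_rng(3)
    A = rng.standard_normal((40, 120)) / np.sqrt(40)
    x_true = np.zeros(120)
    x_true[[7, 55, 99]] = [2.0, -3.0, 2.5]
    x_hat, supp = lookahead_omp(A, A @ x_true, k=3)
    ```

    The extra lstsq solves per candidate make the cost grow with the candidate count and look-ahead depth, which is the complexity/performance trade-off the abstract refers to.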

    Beamformers For Sparse Recovery

    In sparse recovery from measurement data, a common approach is to use greedy pursuit reconstruction algorithms. Most of these algorithms use a correlation filter to detect active components in the sparse data. In this paper, we show how greedy pursuit algorithms can be modified to use beamformers instead of the standard correlation filter. Using these beamformers, improved performance is obtained. In particular, we discuss beamformers for the average-case and worst-case scenarios and give methods for constructing them.
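    A minimal sketch of the idea: in the atom-detection step of a greedy pursuit, score atoms with a bank of beamformer outputs rather than the plain correlations A^T r. The MVDR-style construction below (weights w_j = R^-1 a_j / (a_j^T R^-1 a_j) with R = A A^T + delta*I modeling interference from the other atoms) is an assumed stand-in, not the paper's exact beamformer design.

    ```python
    import numpy as np

    def beamformer_omp(A, y, k, delta=1e-3):
        """Greedy pursuit in which the standard correlation (matched)
        filter A^T r is replaced by a bank of MVDR-style beamformers.
        Illustrative instantiation of the idea with assumed parameters."""
        m, n = A.shape
        Rinv = np.linalg.inv(A @ A.T + delta * np.eye(m))  # interference-plus-regularizer inverse
        RA = Rinv @ A
        W = RA / np.sum(A * RA, axis=0)       # column j is the beamformer w_j
        support, r = [], y.copy()
        for _ in range(k):
            score = np.abs(W.T @ r)           # beamformer outputs instead of A^T r
            score[support] = -np.inf
            support.append(int(np.argmax(score)))
            x_S, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            r = y - A[:, support] @ x_S
        x = np.zeros(n)
        x[support] = x_S
        return x, sorted(support)

    rng = np.random.default_rng(4)
    A = rng.standard_normal((50, 120)) / np.sqrt(50)
    x_true = np.zeros(120)
    x_true[[9, 46, 101]] = [4.0, -4.0, 4.0]
    x_hat, supp = beamformer_omp(A, A @ x_true, k=3)
    ```

    Since each w_j satisfies w_j^T a_j = 1 while suppressing energy from the modeled interference covariance, the detector output at an active atom stays near its coefficient while crosstalk is attenuated relative to the plain correlation filter.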